Structured Review

COMPAS Inc GBDT reference model
(Train-Time Savings from Guessing). Training and test accuracy versus run time for GOSDT (blue) and DL8.5 (gold) on the COMPAS data set with regularization 0.001 and different depth constraints. The black line shows the accuracy of a GBDT model (100 max-depth-3 weak classifiers). Circles show baseline performance (no guessing), stars show performance with all three guessing techniques, and marker size indicates the number of leaves. The displayed confidence bands come from 5-fold cross-validation. DL8.5 requires a depth constraint, so it does not appear in the right-most plots.
GBDT reference model, supplied by COMPAS Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation (average of 90 stars from 1 article review; scored by Bioz Stars, 2026-04).
https://www.bioz.com/result/gbdt reference model/product/COMPAS Inc

Images

1) Product Images from "Fast Sparse Decision Tree Optimization via Reference Ensembles"

Article Title: Fast Sparse Decision Tree Optimization via Reference Ensembles

Journal: Proceedings of the AAAI Conference on Artificial Intelligence

doi: 10.1609/aaai.v36i9.21194

(Train-Time Savings from Guessing). Training and test accuracy versus run time for GOSDT (blue) and DL8.5 (gold) on the COMPAS data set with regularization 0.001 and different depth constraints. The black line shows the accuracy of a GBDT model (100 max-depth-3 weak classifiers). Circles show baseline performance (no guessing), stars show performance with all three guessing techniques, and marker size indicates the number of leaves. The displayed confidence bands come from 5-fold cross-validation. DL8.5 requires a depth constraint, so it does not appear in the right-most plots.

Techniques Used: Marker, Biomarker Discovery
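The reference model named in this legend (a GBDT with 100 max-depth-3 weak classifiers) and the 5-fold cross-validation that produces the confidence bands can be sketched with scikit-learn. This is a minimal stand-in: the synthetic data below replaces the actual COMPAS features, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the COMPAS features.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# GBDT reference model: 100 boosted trees, each limited to depth 3.
gbdt = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)

# 5-fold cross-validated accuracy, as used for the confidence bands.
scores = cross_val_score(gbdt, X, y, cv=5)
print(scores.mean())
```

The mean and spread of `scores` across the five folds are what a plot like this one would summarize as a line with a confidence band.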

Training accuracy versus run time for GOSDT (blue) and DL8.5 (gold) with and without guessing strategies. The black line shows the training accuracy of a GBDT model (with 100 max-depth-3 estimators).


DL8.5 and GOSDT use guessed thresholds and guessed lower bounds. CART was trained on the original dataset with no guesses. Batree was trained using a random forest reference model with 10 max-depth-3 weak classifiers. The black line shows the training accuracy of a GBDT model (with 100 max-depth-3 estimators).
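The CART baseline mentioned here (greedy tree induction on the original data, no guessing) is straightforward to reproduce with scikit-learn's `DecisionTreeClassifier`. The depth limit and synthetic data below are illustrative assumptions, not the paper's exact settings:

```python
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in data; the paper uses the original (un-binarized) dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# CART-style greedy tree, trained directly on the data with no guesses.
cart = DecisionTreeClassifier(max_depth=5, random_state=0)
cart.fit(X, y)

# Leaf count and training accuracy are the two axes these figures compare.
print(cart.get_n_leaves(), cart.score(X, y))
```

Unlike GOSDT or DL8.5, CART makes no optimality guarantee, which is why it serves as the fast-but-greedy point of comparison.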


Performance of GOSDT and DL8.5 trained with threshold/lower-bound guesses based on two different reference models, and without threshold/lower-bound guesses (red dots), on the Spiral dataset. GOSDT and DL8.5 trained with a GBDT of 20 max-depth-3 weak classifiers are comparable to the baselines. λ = 0.01. Results for all depth limits are included.
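For context on what λ = 0.01 means here: GOSDT minimizes a sparsity-penalized training objective of roughly the following form (notation paraphrased, not copied verbatim from the paper):

```latex
\min_{T} \;\; \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}\left[\, y_i \neq T(x_i) \,\right] \;+\; \lambda \cdot \#\mathrm{leaves}(T)
```

so each additional leaf must improve training accuracy by at least λ (here, one percentage point) to be worth including, which is how the regularization level trades sparsity against accuracy in these plots.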


Sparsity versus accuracy for GOSDT with EBM threshold guessing and GBDT threshold guessing. Two different tolerances (0.1, 0.08) were used for the EBM threshold guesses; 40 decision stumps were used to train the GBDT reference model. The black dashed line is a GBDT with 100 max-depth-3 weak classifiers.
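The "40 decision stumps" setup suggests how GBDT threshold guessing works: fit a small boosted ensemble of depth-1 trees, then keep only the split thresholds the ensemble actually used as candidate splits for the optimal tree. A sketch of that idea with scikit-learn (the paper's exact harvesting procedure may differ in detail):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# Fit 40 decision stumps (max_depth=1) as the reference ensemble.
gbdt = GradientBoostingClassifier(n_estimators=40, max_depth=1, random_state=0)
gbdt.fit(X, y)

# Harvest each stump's (feature, threshold) root split as a guessed threshold.
guesses = set()
for stump in gbdt.estimators_.ravel():
    tree = stump.tree_
    if tree.node_count > 1:  # the root actually split
        guesses.add((int(tree.feature[0]), float(tree.threshold[0])))

# Binarize: one 0/1 column per guessed threshold. The downstream optimal-tree
# search then only considers these splits instead of all possible thresholds.
X_bin = np.column_stack([(X[:, f] <= t).astype(int) for f, t in sorted(guesses)])
print(X_bin.shape)
```

Because the ensemble reuses informative thresholds, the binarized feature set stays small (at most one column per stump), which is the source of the train-time savings these figures report.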


Sparsity versus accuracy for GOSDT with all three guessing techniques, and OCT. The black dashed line is a GBDT with 100 max-depth-3 weak classifiers.


Time versus accuracy for GOSDT with all three guessing techniques, and OCT. The black dashed line is a GBDT with 100 max-depth-3 weak classifiers.




Image Search Results

All of the figures above come from a single article: "Fast Sparse Decision Tree Optimization via Reference Ensembles", Proceedings of the AAAI Conference on Artificial Intelligence, doi: 10.1609/aaai.v36i9.21194.

Article Snippet: We ran GOSDT with a GBDT reference model trained using 40 decision stumps and GOSDT with an EBM reference model trained using tolerance 0.1 and 0.08 on the COMPAS and FICO datasets.